mental content


Representation and Interpretation in Artificial and Natural Computing

Pineda, Luis A.

arXiv.org Artificial Intelligence

Artificial computing machinery transforms representations through an objective process that is interpreted subjectively by humans, so the machine and the interpreter are different entities; in putative natural computing, by contrast, both processes are performed by the same agent. The method or process that transforms a representation is called here "the mode of computing". The mode used by digital computers is the algorithmic one, but there are others, such as those of quantum computers and diverse forms of non-conventional computing, and there is an open-ended set of representational formats and modes that could be used in artificial and natural computing. A mode based on a notion of computing different from Turing's may perform feats beyond what the Turing Machine does, but the two modes would not be of the same kind and could not be compared. For a mode of computing to be more powerful than the algorithmic one, it would have to compute functions lacking an effective algorithm, and Church's Thesis would not hold. Here, a thought experiment is presented in which a computational demon uses a hypothetical mode to just such an effect. If there is natural computing, there is a mode of natural computing whose properties may be causal to phenomenological experience; discovering it would amount to solving the hard problem of consciousness. But if such a mode turns out not to exist, there is no such thing as natural computing, and the mind is not a computational process.
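
To make the "algorithmic mode" concrete (this sketch is not from the paper): the Python below runs a minimal one-tape Turing machine whose transition table is exactly the kind of effective algorithm that Church's Thesis asserts exists for every computable function. The machine and its binary-increment table are illustrative inventions.

```python
# Minimal sketch of the algorithmic mode of computing: a deterministic
# one-tape Turing machine. The transition table below increments a binary
# number; it is the "effective algorithm" for that function.

def run_turing_machine(tape, transitions, state="start", head=0, blank="_"):
    """Run a deterministic one-tape Turing machine until it halts."""
    tape = list(tape)
    while state != "halt":
        # Extend the tape with blanks as the head moves off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Binary increment: scan right to the end of the input, then carry 1s to 0s
# leftward until a 0 (or a blank) absorbs the carry.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

print(run_turing_machine("1011", INCREMENT))  # -> "1100" (11 + 1 = 12)
```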


Reading Users' Minds from What They Say: An Investigation into LLM-based Empathic Mental Inference

Zhu, Qihao, Chong, Leah, Yang, Maria, Luo, Jianxi

arXiv.org Artificial Intelligence

In human-centered design, developing a comprehensive and in-depth understanding of user experiences, i.e., empathic understanding, is paramount for designing products that truly meet human needs. Nevertheless, accurately comprehending the real underlying mental states of a large human population remains a significant challenge today. This difficulty mainly arises from the trade-off between depth and scale of user experience research: gaining in-depth insights from a small group of users does not easily scale to a larger population, and vice versa. This paper investigates the use of Large Language Models (LLMs) for performing mental inference tasks, specifically inferring users' underlying goals and fundamental psychological needs (FPNs). Baseline and benchmark datasets were collected from human users and designers to develop an empathic accuracy metric for measuring the mental inference performance of LLMs. The empathic accuracy with which different LLMs, under varied zero-shot prompt-engineering techniques, infer goals and FPNs is compared against that of human designers. Experimental results suggest that LLMs can infer and understand the underlying goals and FPNs of users with performance comparable to that of human designers, indicating a promising avenue for enhancing the scalability of empathic design approaches through the integration of advanced artificial intelligence technologies. This work has the potential to significantly augment the toolkit available to designers during human-centered design, enabling the development of both large-scale and in-depth understanding of users' experiences.
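
As a rough illustration of the kind of zero-shot mental-inference prompting the paper experiments with: the sketch below composes a prompt from a user's own words and asks for the underlying goal and the FPNs at stake. The FPN list, the prompt wording, and the call_llm stub are assumptions made for the sketch, not the authors' materials or metric.

```python
# Illustrative zero-shot mental-inference prompt (assumed, not the paper's).

FPNS = ["autonomy", "competence", "relatedness", "security", "stimulation"]
# Hypothetical subset of fundamental psychological needs, for illustration only.

def build_zero_shot_prompt(user_statement: str) -> str:
    """Compose a zero-shot mental-inference prompt from a user's own words."""
    return (
        "You are assisting a human-centered designer.\n"
        f'A user said: "{user_statement}"\n'
        "1. Infer the user's underlying goal in one sentence.\n"
        f"2. Pick the psychological need(s) at stake from: {', '.join(FPNS)}.\n"
        "Answer as JSON with keys 'goal' and 'needs'."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API; returns a canned example."""
    return '{"goal": "...", "needs": ["autonomy"]}'

print(call_llm(build_zero_shot_prompt(
    "I never know which settings my smart thermostat changed on its own.")))
```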


AI-Powered 'Thought Decoders' Won't Just Read Your Mind--They'll Change It

WIRED

Now, there's concern that neuroscientists might be doing the same by developing technologies capable of "decoding" our thoughts and laying bare the hidden contents of our mind. Though neural decoding has been in development for decades, it broke into popular culture earlier this year, thanks to a slew of high-profile papers. In one, researchers used data from implanted electrodes to reconstruct the Pink Floyd song participants were listening to. In another paper, published in Nature, scientists combined brain scans with AI-powered language generators (like those undergirding ChatGPT and similar tools) to translate brain activity into coherent, continuous sentences. This method didn't require invasive surgery, and yet it was able to reconstruct the meaning of a story from purely imagined, rather than spoken or heard, speech.


John Searle's Syntax-vs.-Semantics Argument Against Artificial Intelligence (AI)

#artificialintelligence

This is a simple introduction to the philosopher John Searle's main argument against artificial intelligence (AI); as an introduction, it doesn't come down either for or against that argument. The main body of Searle's argument is how he distinguishes syntax from semantics. Thus the well-known Chinese Room scenario is simply Searle's means of expressing what he sees as the vital distinction between syntax and semantics in debates about computers and AI generally. One way in which Searle puts his case is by appeal to reference. That position is summed up when Searle (in his 'Minds, Brains, and Programs' of 1980) writes: "Whereas the English subsystem knows that 'hamburgers' refers to hamburgers, the Chinese subsystem knows only that 'squiggle squiggle' is followed by 'squoggle squoggle'." So whereas what Searle calls the "English subsystem" involves a complex reference relation involving entities in the world, mental states, knowledge of meanings, intentionality, consciousness, memory, and other such things, the Chinese subsystem is only following rules.
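
A toy illustration (mine, not Searle's) of the pure rule-following he attributes to the Chinese subsystem: the lookup table below maps input shapes to output shapes with no reference relation behind them, which is syntax without semantics.

```python
# Toy "Chinese subsystem": purely formal input -> output rules, nothing
# referred to and nothing understood.

RULE_BOOK = {
    "squiggle squiggle": "squoggle squoggle",
    # ... more purely syntactic rules ...
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rule book dictates for the input shapes."""
    return RULE_BOOK.get(symbols, "blank card")

print(chinese_room("squiggle squiggle"))  # -> "squoggle squoggle"
```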